Wrong figures, right answers
A while back, the husband of a friend of mine got a nasty, painful rash on his face. When it got up to his eye and started to affect his vision, he went to the hospital, and after a bunch of tests they found out what was going on. I asked my friend about it when they got home, and apparently the hospital staff had been a lot less helpful than they could have been. She didn’t know exactly what the problem was; she said they had called it “zoister” or something like that, and she probably wasn’t even remembering it right.
I figured she probably wasn’t, because that doesn’t sound like any disease I’ve ever heard of. So I tried punching it into Google, and sure enough, it had the answer. “Did you mean zoster?” I clicked the link, and there it was: herpes zoster, better known as shingles, the revenge of the chickenpox virus. Why the hospital folks didn’t just say “he has shingles,” I’ll never know.
It took a few days before I realized the implications of what I’d done there, though. You may have heard the famous quote from Charles Babbage:
“On two occasions I have been asked,—‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ In one case a member of the Upper, and in the other a member of the Lower, House [of Parliament] put this question. I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.”
As programmers, we like to laugh at stuff like this. Oh, look at the clueless politicians who don’t know the first thing about engineering or GIGO, asking stupid questions whose answer would be obvious if they only gave it half a moment’s thought! But isn’t this what I had just done? I had deliberately, knowingly given a computer bad input, with the full expectation that it would give me the right answer, and it had done so successfully! It kind of made me think.
The members of Parliament Babbage was dealing with weren’t engineers at all; they were politicians, who were used to dealing with people, not machines. And if you say something that’s obviously wrong to a person, there’s a good chance that they might try to correct you in one way or another. It made me wonder if Babbage wasn’t missing the real point of the question: “Mr. Babbage, are you able to build a machine that has any common sense?”
Attempts to do so are by no means a new thing in computer science. The concept of “Do What I Mean” is treated with suspicion if not outright hostility by experienced programmers, and with good reason. From the original DWIM to Microsoft’s Talking Paper Clip to “robust” HTML parsing and JavaScript’s automatic semicolon insertion, the well-meaning attempts of less-clueful designers have been screwing things up for the rest of us for decades. (That’s why I’ve always been a bit suspicious of Oxygene’s Fix-it system.) And yet, occasionally someone manages to get it right, or at least mostly right.
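If you want to see why the suspicion is deserved, here’s the classic automatic-semicolon-insertion trap, sketched in TypeScript (plain JavaScript behaves the same way): the parser decides it knows what you meant and quietly ends the statement for you.

```typescript
function getConfig() {
  return          // ASI inserts a semicolon right here,
  {               // so this object literal is never returned;
    verbose: true // it parses as an unreachable block statement instead.
  };
}

console.log(getConfig()); // prints "undefined", not { verbose: true }
```

A human reader can see at a glance what was meant; the parser’s well-meaning guess silently returns nothing at all.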
I think a big part of why Google gets it right is that Google searches work with natural-language subjects, while programming is based on formal language. Natural language is understood intuitively, though, not formally, which means that adding a bit of virtual intuition into your natural language processing is a lot less likely to produce really bad results than doing the same for code. My experience wasn’t the result of a simple spell check, either: there are pages out there that match the word “zoister”, but Google’s system knew that there was a similar word that was more likely to be relevant.
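If you’re curious what that kind of matching might look like in miniature, here’s a toy sketch in TypeScript with an invented frequency table. It’s nothing like whatever Google actually runs, but it shows the basic trick: generate everything one edit away from the query, then keep only the candidates that people actually search for, ranked by how often.

```typescript
// Generate every string one edit (delete, transpose, replace, insert) away from a word.
function edits1(word: string): Set<string> {
  const letters = "abcdefghijklmnopqrstuvwxyz";
  const results = new Set<string>();
  for (let i = 0; i <= word.length; i++) {
    const head = word.slice(0, i);
    const tail = word.slice(i);
    if (tail) results.add(head + tail.slice(1));                                 // delete
    if (tail.length > 1) results.add(head + tail[1] + tail[0] + tail.slice(2));  // transpose
    for (const c of letters) {
      if (tail) results.add(head + c + tail.slice(1));                           // replace
      results.add(head + c + tail);                                              // insert
    }
  }
  return results;
}

// Suggest the most-searched-for known term within one edit of the query, if any.
function suggest(query: string, termFrequency: Map<string, number>): string | null {
  let best: string | null = null;
  let bestCount = 0;
  for (const candidate of edits1(query)) {
    const count = termFrequency.get(candidate) ?? 0;
    if (count > bestCount) {
      best = candidate;
      bestCount = count;
    }
  }
  return best;
}

// A tiny stand-in for the statistics a real search engine would mine from its query logs.
const termFrequency = new Map<string, number>([
  ["zoster", 120_000],
  ["oyster", 800_000],
  ["roster", 500_000],
]);

console.log(suggest("zoister", termFrequency)); // "zoster": one deletion away, and a term people search for
```

The real systems presumably layer much more on top of this (query logs, click data, the rest of the query), but the basic idea is the same: the “intuition” comes from knowing which nearby strings people actually mean.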
This is certainly an interesting time we’re living in. Intuition has always been the major thing that sets real intelligence apart from mere computing power. With engineers now beginning to develop virtual intuition routines, how much longer before we start to see true AIs emerging?
Good read
Wow… I really enjoyed reading that…
I had never given the question of computational intuition any thought until I read your post. Now I am finding myself reading about it as often as possible.
I head up payments innovation at a large global bank, and a big factor in whether something succeeds or fails is often not whether it offers a compelling value proposition for consumers and businesses, but rather how intuitive it is for them to understand its value and how to extract that value (i.e., how to use it). The days of putting tons of help screens and printing thick user manuals are thankfully mostly behind us. We have always viewed the challenge as being the distinction between natural user interfaces and simple user interfaces.
The perspective that your discussion has brought is that instead of trying to simplify the formal interaction that a user might have with an app, we should be focusing more on making the app more intuitive, so that the user interaction can become more natural… informal, even.
An interesting read, but you are drawing too long a bow.
“but Google’s system knew that there was a similar word that was more likely to be relevant”
Wrong. Google had no idea whether it was likely to be more or less relevant. It simply knew that it was close enough to __perhaps__ be relevant. There was no “virtual intuition” at work, only pattern matching and similarity assessments. Routine, algorithmic tasks which can give the illusion of intuition but are nothing of the sort.
Had you decided to spell the condition “zoyster” rather than “zoister”, Google’s “intuition” would have failed you (it makes no recommendation that you might be looking for “zoster” in that case).
The real problem in your case is that your search term was obtained from an unreliable source. Not her fault – the medical staff used a medical term and the non-medical recipient of the information mis-remembered it when relaying it.
Had her memory latched onto the “herpes” part of “herpes zoster”, your Google search would have been simply misleading – there is far greater interest in Herpes Simplex than in Herpes Zoster and all the results offered by Google would have been biased in that direction as a result. Herpes Zoster is not even suggested as a related search in that instance.